With the Tokyo Olympics coming to an end, this seems to be a great time to look at how the athletes have performed over the years. Particularly, let us take a look at the pace of Marathon Gold Medal winners since 1896.
Which of the two images do you think better describes the expected trend and uncertainty of the runners' pace over the years?
The right-side plot, which appears to be a better fit for the data, is made using what is known as a non-stationary kernel in Gaussian processes.
Let us first look at the basics of Gaussian Processes.
Gaussian Processes (GPs) are powerful machine learning methods designed to solve classification and regression problems. Unlike many traditional methods, GPs additionally provide uncertainty estimates along with their predictions. If you didn’t notice, Gaussian Processes are named after Gaussian distributions. In regression problems, such as finding a function $y = f(X)$ that best describes the data $X$, we assume that the data is generated from a multivariate Gaussian distribution (this is known as the prior in GPs).
The multivariate Gaussian distribution is defined by a mean vector μ and a covariance matrix Σ.
The mean vector gives you the expected value, like any other regression model. The covariance matrix, therefore, forms the core of GP regression models. The covariance functions (also called kernels) involved in Σ describe the joint variability of the Gaussian process random variables.
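To make the roles of the mean vector and covariance matrix concrete, here is a minimal sketch (with an illustrative 2-D example, not any dataset from this article) that samples from a multivariate Gaussian and checks that the empirical covariance recovers Σ:

```python
import numpy as np

# Illustrative: the mean vector and covariance matrix fully specify a
# multivariate Gaussian; the off-diagonal entry encodes joint variability.
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])   # strong positive correlation

rng = np.random.default_rng(42)
samples = rng.multivariate_normal(mu, Sigma, size=10000)

# The empirical covariance of the samples should be close to Sigma.
emp_cov = np.cov(samples.T)
print(np.round(emp_cov, 2))
```

The larger the off-diagonal entry of Σ, the more strongly the two coordinates move together, which is exactly the behaviour a kernel imposes on nearby function values.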
Görtler et al. provide an excellent visual exploration of Gaussian Processes, with mathematical intuition as well as a deeper explanation of GP components. In this article, we will go over some shortcomings of standard GPs and talk about the extensions that can be built upon them.
The kernel is a key component in building GP models. Let us look at an analogy for covariance functions (also called kernel functions, or simply kernels).
Assume we have a few children standing side-by-side in a line, and we ask them to move randomly back and forth. We’ll see that each child could end up standing wherever they desire, unaffected by the other children. However, if we repeat the experiment with each child tightly holding the hands of their neighbours, we would observe that neighbouring children end up standing quite close to each other in the final configuration.
The initial setup of the experiment is analogous to low correlation among nearby data points, while the latter resembles the opposite. A kernel function in GP regression captures the overall trend in the data and generalizes over unseen data.
Radial basis function (RBF) (also known as Gaussian kernel) is an example of a widely used covariance function in GP modelling as described in Eq. 1.
\begin{align} K_{rbf}(\mathbf{x}_i, \mathbf{x}_j) &= \sigma^2 \exp\left(-\frac{||\mathbf{x}_i - \mathbf{x}_j||_2^2}{2l^2}\right)\\ \end{align}There are a variety of kernels that can be used to model various kinds of functions. The following kernel parameters play a significant role in modelling a GP with most traditional kernels:
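Eq. 1 can be sketched directly in numpy. The variance and length-scale values below are illustrative defaults, not fitted parameters:

```python
import numpy as np

# A sketch of the RBF kernel from Eq. 1: sigma^2 (variance) scales the
# output, and l (lengthscale) controls how quickly correlation decays.
def rbf_kernel(X1, X2, variance=1.0, lengthscale=1.0):
    """K(xi, xj) = variance * exp(-||xi - xj||^2 / (2 * l^2))."""
    sq_dist = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-sq_dist / (2.0 * lengthscale ** 2))

X = np.array([[0.0], [1.0], [3.0]])
K = rbf_kernel(X, X)
print(np.round(K, 3))
# The diagonal equals the variance; correlation decays with distance.
```

Note that the exponent is negative: identical inputs get covariance $\sigma^2$, and the covariance decays towards zero as points move apart.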
We discuss two broad categories of kernels, stationary and non-stationary in Section 4, and also compare their performances on standard datasets.
Now, let us visualize GPs fitted on some standard regression datasets.
Notice that the noisy sine data has uniform noise over the entire input region. We can also see that the smoothness of the sine function remains similar for any value of the input $X$.
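A fit like this can be reproduced with a few lines of scikit-learn. This is a minimal sketch, not the exact setup behind the figure; the noise level and kernel initialisation are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Noisy sine data: uniform noise across the whole input region.
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

# Stationary RBF kernel plus a white-noise term; hyperparameters are
# optimised automatically during fit().
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_test = np.linspace(0, 2 * np.pi, 100)[:, None]
mean, std = gp.predict(X_test, return_std=True)  # predictions + uncertainty
```

The returned `std` is the per-point predictive uncertainty that distinguishes GPs from most point-estimate regressors.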
Now, we show the same model fit over slightly more complex data.
There are two similarities between the noisy sine dataset and the noisy complex dataset: i) the noise in the data points is uniform across $X$; ii) the underlying function that generates the dataset appears equally smooth (stationary) across $X$.
In the real world, it is entirely possible that datasets do not satisfy one or both of the above properties. Now, we will show the performance of stationary GPs on a real-world dataset.
The Olympic Marathon dataset includes gold medal times for the Olympic Marathon from 1896 to 2020. One noticeable point about this dataset is that, in 1904, the Marathon was badly organised, leading to very slow times - Athletics at the 1904 St. Louis Summer Games: Men's Marathon.
Let us see how a standard GP performs on this dataset. These fits are obtained by optimizing the likelihood function over two important parameters: the length scale ($l$) and the variance ($\sigma^2$).
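To show what "optimizing the likelihood" means, here is a sketch that evaluates the GP log marginal likelihood over a small grid of $(l, \sigma^2)$ pairs and keeps the best one. Real implementations use gradient-based optimisers rather than a grid, and the synthetic data and noise level here are illustrative:

```python
import numpy as np

# Log marginal likelihood of a GP with an RBF kernel and fixed noise.
def log_marginal_likelihood(X, y, lengthscale, variance, noise=0.1):
    sq_dist = (X[:, None] - X[None, :]) ** 2
    K = variance * np.exp(-sq_dist / (2 * lengthscale ** 2))
    K += noise * np.eye(len(X))
    L = np.linalg.cholesky(K)                       # stable inversion
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2 * np.pi))

rng = np.random.default_rng(1)
X = np.linspace(0, 5, 30)
y = np.sin(X) + 0.1 * rng.standard_normal(30)

# Pick the (l, sigma^2) pair with the highest likelihood on the grid.
grid = [(l, v) for l in (0.1, 0.5, 1.0, 2.0) for v in (0.5, 1.0, 2.0)]
best = max(grid, key=lambda p: log_marginal_likelihood(X, y, *p))
print(best)
```

The key point: both parameters are chosen once, globally, for the whole input space — which is exactly the limitation the next sections address.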
From the above fit, we can see that the data is more irregular, or noisier, up to 1950; after that, the trend in the data becomes clearer and narrower. In other words, the noise in the data decreases from the left to the right side of the plot. The predictive variance in the first fit is overestimated due to the anomaly in the year 1904. Once we replace the 1904 observation with another value, the fit produces a reasonable predictive variance.
An ideal fit for the original dataset should have decreasing predictive variance and an increasingly smooth fitted function as the year increases. Standard (stationary) GPs are not internally well-equipped to deal with such datasets. Such datasets are known as non-stationary; we now formally discuss stationarity and non-stationarity.
A definition of a stationary process from Wikipedia is as follows,
The definition above also applies to space or any other input domain. Now, let us see what it means in the context of Gaussian processes.
So far, the learnt parameters have been constant over the whole input space. This means the model treats the parameters as stationary, which can be a constraint for real datasets. The length scale is an essential parameter in the RBF kernel, as it controls the smoothness of the learnt functions. A large length scale shrinks the exponent in the RBF kernel, keeping the correlation between distant points high and giving us a smoother function. In contrast, a small length scale lets the correlation decay quickly, enabling us to capture rapidly varying data.
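We can verify this effect of the length scale numerically. For a fixed pair of points one unit apart, a larger length scale yields a higher RBF correlation (the illustrative length-scale values below are arbitrary):

```python
import numpy as np

# RBF correlation between two points at a given distance.
def rbf_corr(dist, lengthscale):
    return np.exp(-dist ** 2 / (2 * lengthscale ** 2))

dist = 1.0
for l in (0.25, 1.0, 4.0):
    # Larger l -> higher correlation -> smoother sampled functions.
    print(f"l = {l:4}: corr = {rbf_corr(dist, l):.4f}")
```

With $l = 0.25$ the two points are nearly independent, while with $l = 4$ they are almost perfectly correlated — hence the smoother fit.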
Unlike stationary kernels, non-stationary kernels depend on the positions of the input points as well as the distance between them. So, how can we construct a kernel function that can address such input-dependent variations?
Now that we have built up the necessary intuition for non-stationary GPs, let’s take a look at a unique way of introducing non-stationarity: varying length scales. The LLS GP is a two-tiered framework that allows the length scale to vary over the input space. More precisely, an additional, independent GP is used to model the distribution of the length scale.
The varying length scale can allow us to adjust the amount of smoothness of the function for different input positions.
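One well-known way to build a valid covariance with an input-dependent length scale is the Gibbs kernel. The sketch below uses a simple hand-written length-scale function as a stand-in; in an LLS GP, that function would itself be modelled by a separate GP rather than fixed by hand:

```python
import numpy as np

# Gibbs non-stationary kernel: each input x carries its own length
# scale l(x), and the kernel blends the two length scales of a pair.
def gibbs_kernel(X1, X2, lengthscale_fn):
    l1 = lengthscale_fn(X1)[:, None]
    l2 = lengthscale_fn(X2)[None, :]
    sq_sum = l1 ** 2 + l2 ** 2
    prefactor = np.sqrt(2 * l1 * l2 / sq_sum)   # keeps K positive definite
    sq_dist = (X1[:, None] - X2[None, :]) ** 2
    return prefactor * np.exp(-sq_dist / sq_sum)

# Hypothetical length-scale function: short (wiggly) on the left,
# long (smooth) on the right.
l_fn = lambda x: 0.1 + 0.5 * x
X = np.linspace(0.0, 4.0, 5)
K = gibbs_kernel(X, X, l_fn)
print(np.round(K, 3))  # symmetric, with a unit diagonal
```

Because the length scale enters both the prefactor and the exponent, the resulting matrix remains a valid covariance while the effective smoothness changes across the input space.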
The above fit is representative of the extra power of non-stationary GPs. We observe that the fit captures the outlier by using a lower length scale there. Lower length-scale values indicate lower covariance between the points around the outlier. This allows the model to be less smooth close to the outlier, while smoothing itself out in later years, when athletes' pace times slowly converge.
Let us look at a few more comparisons to better understand non-stationary GPs.
The noise in this sine wave is introduced in proportion to the amplitude of the wave at each input, so there is much more noise at lower x positions. The non-stationary LLS GP captures this high variance at lower x positions by using a lower length scale, and at later positions it adjusts the length scale to a higher value, giving a smoother curve. The stationary GP, given its constraints, is unable to capture this varying variance.
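Data of this kind is easy to generate. The sketch below is a hypothetical construction in the spirit of the description above (a decaying amplitude with noise proportional to it), not the exact dataset used in the figure:

```python
import numpy as np

# Heteroscedastic sine data: both the signal amplitude and the noise
# scale decay with x, so the left side of the domain is much noisier.
rng = np.random.default_rng(0)
X = np.linspace(0.1, 3.0, 100)
amplitude = np.exp(-X)                    # hypothetical decay profile
y = amplitude * np.sin(4 * X) + 0.2 * amplitude * rng.standard_normal(100)
```

A stationary GP must pick a single noise level and length scale for all of $X$, so it either over-smooths the left side or under-smooths the right; the LLS GP avoids this trade-off.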
Smooth 1-D is a smoothly varying function with a substantial bump close to x = 0. The figures here compare the stationary GP with the non-stationary LLS GP. If we closely observe the learnt length-scale figure, we can infer that the LLS GP clearly captures the existence of the bump, i.e., the area with lower correlation to the other points, assigning it lower length scales than on either side of the bump. Although the stationary GP seems to fit the data decently here, the smoothness of the trend does not remain intact near the outer regions (x < -2 and x > 2), whereas the LLS GP generalizes much better with the help of the learnt length scales.
Non-stationary GPs show great potential in overcoming the shortcomings of stationary GPs. In this article, we have only touched the tip of the iceberg of a massive area of research. We hope it gives the reader a good intuition of where GP research is heading and how non-stationary kernels can improve the performance of supervised learning.